1 - 20 of 28
1.
Biomed Mater ; 19(3)2024 Apr 26.
Article En | MEDLINE | ID: mdl-38626778

Accurate segmentation of the coronary artery tree from medical images, and personalized 3D printing based on it, is essential for CAD diagnosis and treatment. The current 3D printing literature relies solely on generic models created with different software or on 3D coronary artery models manually segmented from medical images. Moreover, few studies have examined the bioprintability of 3D models generated by artificial intelligence (AI) segmentation for complex, branched structures. In this study, deep learning algorithms with transfer learning were employed for accurate segmentation of the coronary artery tree from medical images, yielding printable segmentations. We propose a combination of deep learning and 3D printing that accurately segments and prints complex vascular patterns in coronary arteries. We then 3D-printed the AI-generated coronary artery segmentation to fabricate a bifurcated hollow vascular structure. Our results indicate improved segmentation performance with the aid of transfer learning, with a Dice overlap score of 0.86 on a test set of 10 coronary computed tomography angiography images. Bifurcated regions from the 3D models were then printed into a Pluronic F-127 support bath using an alginate + glucomannan hydrogel. We successfully fabricated the bifurcated coronary artery structures with high length and wall-thickness accuracy; however, the outer diameters of the vessels and the length of the bifurcation point differed from the 3D models. The extrusion of excess material, observed primarily when the nozzle moves from the left to the right vessel during 3D printing, can be mitigated by adjusting the nozzle speed. Shape accuracy could be further improved by designing a multi-axis printhead that can change the printing angle in three dimensions. This study thus demonstrates the potential of AI-segmented 3D models for the 3D printing of coronary artery structures and, with further improvement, for the fabrication of patient-specific vascular implants.
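
For reference, the Dice overlap score quoted above is the standard set-overlap metric for binary masks; a minimal sketch of how it is typically computed (illustrative, not the authors' code):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Example: two overlapping square masks
a = np.zeros((64, 64), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), dtype=bool); b[15:35, 15:35] = True
print(round(dice_score(a, b), 3))  # ~0.56
```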


Algorithms , Artificial Intelligence , Coronary Vessels , Printing, Three-Dimensional , Humans , Coronary Vessels/diagnostic imaging , Deep Learning , Imaging, Three-Dimensional/methods , Coronary Angiography/methods , Alginates/chemistry , Computed Tomography Angiography/methods , Software
2.
Diagn Interv Radiol ; 2024 Apr 29.
Article En | MEDLINE | ID: mdl-38682670

The rapid evolution of artificial intelligence (AI), particularly in deep learning, has significantly impacted radiology, introducing an array of AI solutions for interpretative tasks. This paper provides radiology departments with a practical guide for selecting and integrating AI solutions, focusing on interpretative tasks that require the active involvement of radiologists. Our approach is not to list available applications or review scientific evidence, as this information is readily available in previous studies; instead, we concentrate on the essential factors radiology departments must consider when choosing AI solutions. These factors include clinical relevance, performance and validation, implementation and integration, clinical usability, costs and return on investment, and regulations, security, and privacy. We illustrate each factor with hypothetical scenarios to provide a clearer understanding and practical relevance. Through our experience and literature review, we provide insights and a practical roadmap for radiologists to navigate the complex landscape of AI in radiology. We aim to assist in making informed decisions that enhance diagnostic precision, improve patient outcomes, and streamline workflows, thus contributing to the advancement of radiological practices and patient care.

3.
Eur J Radiol ; 173: 111356, 2024 Apr.
Article En | MEDLINE | ID: mdl-38364587

BACKGROUND: Explainable Artificial Intelligence (XAI) is prominent in the diagnostics of opaque deep learning (DL) models, especially in medical imaging. Saliency methods are commonly used, yet there is a lack of quantitative evidence regarding their performance. OBJECTIVES: To quantitatively evaluate the performance of widely utilized saliency XAI methods in the task of breast cancer detection on mammograms. METHODS: Three radiologists drew ground-truth boxes on a balanced mammogram dataset of women (n = 1496 cancer-positive and negative scans) from three centers. A modified, pre-trained DL model was employed for breast cancer detection, using MLO and CC images. Saliency XAI methods, including Gradient-weighted Class Activation Mapping (Grad-CAM), Grad-CAM++, and Eigen-CAM, were evaluated. We used the Pointing Game to assess these methods, determining whether the maximum value of a saliency map fell within the ground-truth bounding boxes; the score represents the ratio of correctly identified lesions among all cancer patients and ranges from 0 to 1. RESULTS: The development sample included 2244 women (75%), with the remaining 748 women (25%) in the testing set for unbiased XAI evaluation. The model's recall, precision, accuracy, and F1-score in identifying cancer in the testing set were 69%, 88%, 80%, and 0.77, respectively. The Pointing Game scores for Grad-CAM, Grad-CAM++, and Eigen-CAM were 0.41, 0.30, and 0.35 in women with cancer, changing only marginally (0.41, 0.31, and 0.36) when considering only true-positive samples. CONCLUSIONS: While saliency-based methods provide some degree of explainability, they frequently fall short of delineating how DL models arrive at decisions in a considerable number of instances.
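
The Pointing Game described in METHODS reduces to a simple per-case check: does the saliency map's maximum fall inside a ground-truth box? A sketch under that reading of the abstract (the box format is an assumption):

```python
import numpy as np

def pointing_game_hit(saliency: np.ndarray, boxes) -> bool:
    """True if the saliency map's maximum falls inside any ground-truth box.

    boxes: iterable of (row_min, row_max, col_min, col_max), inclusive.
    """
    r, c = np.unravel_index(np.argmax(saliency), saliency.shape)
    return any(r0 <= r <= r1 and c0 <= c <= c1 for r0, r1, c0, c1 in boxes)

def pointing_game_score(saliency_maps, boxes_per_case) -> float:
    """Fraction of cancer cases where the saliency maximum hits a lesion box."""
    hits = sum(pointing_game_hit(s, b)
               for s, b in zip(saliency_maps, boxes_per_case))
    return hits / len(saliency_maps)
```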


Breast Neoplasms , Deep Learning , Humans , Female , Artificial Intelligence , Mammography , Mental Recall , Breast Neoplasms/diagnostic imaging
4.
IEEE Trans Biomed Eng ; 71(3): 855-865, 2024 Mar.
Article En | MEDLINE | ID: mdl-37782583

Cine cardiac magnetic resonance (CMR) imaging is considered the gold standard for cardiac function evaluation. However, cine CMR acquisition is inherently slow, and in recent decades considerable effort has been put into accelerating scan times without compromising image quality or the accuracy of derived results. In this article, we present a fully automated, quality-controlled, integrated framework for reconstruction, segmentation, and downstream analysis of undersampled cine CMR data. The framework produces high-quality reconstructions and segmentations, with undersampling factors optimised on a scan-by-scan basis. This results in reduced scan times and automated analysis, enabling robust and accurate estimation of functional biomarkers. To demonstrate the feasibility of the proposed approach, we perform simulations of radial k-space acquisitions using in-vivo cine CMR data from 270 subjects from the UK Biobank (with synthetic phase) and in-vivo cine CMR data from 16 healthy subjects (with real phase). The results demonstrate that the optimal undersampling factor varies between subjects, corresponding to scan-time differences of approximately 1 to 2 seconds per slice. We show that our method can produce quality-controlled images with the mean scan time reduced from 12 to 4 seconds per slice, and that image quality is sufficient for clinically relevant parameters to be estimated automatically to within a 5% mean absolute difference.
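
The scan-by-scan optimisation of the undersampling factor can be pictured as a quality-controlled search over accelerations; a hedged sketch in which `reconstruct`, `segment`, and `passes_qc` are caller-supplied callables standing in for the framework's networks (a hypothetical interface, not the paper's API):

```python
def optimal_undersampling(kspace, reconstruct, segment, passes_qc,
                          factors=(2, 4, 6, 8, 10)):
    """Pick the highest acceleration whose reconstruction still passes QC.

    `reconstruct`, `segment`, and `passes_qc` are hypothetical stand-ins
    for the reconstruction net, segmentation net, and quality-control step.
    """
    best = None
    for factor in factors:                 # progressively more aggressive
        image = reconstruct(kspace, factor)
        mask = segment(image)
        if passes_qc(image, mask):
            best = factor                  # quality still acceptable
        else:
            break                          # quality degraded; stop searching
    return best
```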


Deep Learning , Humans , Magnetic Resonance Imaging, Cine/methods , Heart/diagnostic imaging
5.
Insights Imaging ; 14(1): 110, 2023 Jun 19.
Article En | MEDLINE | ID: mdl-37337101

OBJECTIVE: To evaluate the effectiveness of a self-adapting deep network, trained on large-scale bi-parametric MRI data, in detecting clinically significant prostate cancer (csPCa) in external multi-center data from men of diverse demographics, and to investigate the advantages of transfer learning. METHODS: We used two samples: (i) publicly available multi-center and multi-vendor Prostate Imaging: Cancer AI (PI-CAI) training data, consisting of 1500 bi-parametric MRI scans, along with its unseen validation and testing samples; (ii) in-house multi-center testing and transfer learning data, comprising 1036 and 200 bi-parametric MRI scans, respectively. We trained a self-adapting 3D nnU-Net model using probabilistic prostate masks on the PI-CAI data and evaluated its performance on the hidden validation and testing samples and on the in-house data with and without transfer learning. We used the area under the receiver operating characteristic (AUROC) curve to evaluate patient-level performance in detecting csPCa. RESULTS: The PI-CAI training data had 425 scans with csPCa, while the in-house testing and fine-tuning data had 288 and 50 scans with csPCa, respectively. The nnU-Net model achieved AUROCs of 0.888 and 0.889 on the hidden validation and testing data, respectively. The model performed with an AUROC of 0.886 on the in-house testing data, with a slight decrease in performance to 0.870 using transfer learning. CONCLUSIONS: The state-of-the-art deep learning method using prostate masks trained on large-scale bi-parametric MRI data provides high performance in detecting csPCa in internal and external testing data with different characteristics, demonstrating the robustness and generalizability of deep learning within and across datasets. CLINICAL RELEVANCE STATEMENT: A self-adapting deep network, utilizing prostate masks and trained on large-scale bi-parametric MRI data, is effective in accurately detecting clinically significant prostate cancer across diverse datasets, highlighting the potential of deep learning methods for improving prostate cancer detection in clinical practice.
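
Patient-level AUROC, as used here, can be computed from per-patient labels and scores; a minimal scikit-learn sketch with illustrative values (not the study's data):

```python
from sklearn.metrics import roc_auc_score

# y_true: 1 if the patient has csPCa, 0 otherwise
# y_score: the model's patient-level probability (e.g. max over a lesion map)
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.10, 0.35, 0.80, 0.62, 0.25, 0.91]
print(roc_auc_score(y_true, y_score))  # 1.0 here: every positive outranks every negative
```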

6.
Eur J Radiol ; 165: 110924, 2023 Aug.
Article En | MEDLINE | ID: mdl-37354768

BACKGROUND: Although systems such as Prostate Imaging Quality (PI-QUAL) have been proposed for quality assessment, visual evaluations by human readers remain somewhat inconsistent, particularly among less-experienced readers. OBJECTIVES: To assess the feasibility of deep learning (DL) for the automated assessment of image quality in bi-parametric MRI scans and compare its performance to that of less-experienced readers. METHODS: We used bi-parametric prostate MRI scans from the PI-CAI dataset. Image quality was assessed on a 3-point Likert scale (poor, moderate, excellent). Three expert readers established the ground-truth labels for the development (500 scans) and testing (100 scans) sets. We trained a 3D DL model on the development set using probabilistic prostate masks and an ordinal loss function. Four less-experienced readers scored the testing set for performance comparison. RESULTS: The kappa scores between the DL model and the expert consensus for T2W images and ADC maps were 0.42 and 0.61, representing moderate and good agreement. The kappa scores between the less-experienced readers and the expert consensus for T2W images and ADC maps ranged from 0.39 to 0.56 (fair to moderate) and from 0.39 to 0.62 (fair to good). CONCLUSIONS: DL can offer performance comparable to that of less-experienced readers when assessing image quality in bi-parametric prostate MRI, making it a viable option for an automated quality assessment tool. We suggest that DL models trained on more representative datasets, annotated by a larger group of experts, could yield reliable image quality assessment and potentially substitute for, or assist, visual evaluations by human readers.
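
Agreement on an ordinal 3-point Likert scale is commonly summarised with a weighted Cohen's kappa; a sketch with illustrative ratings (the abstract does not state the exact weighting used):

```python
from sklearn.metrics import cohen_kappa_score

# 0 = poor, 1 = moderate, 2 = excellent (3-point Likert scale)
expert = [2, 1, 1, 0, 2, 2, 1, 0, 1, 2]
model  = [2, 1, 2, 0, 1, 2, 1, 1, 1, 2]
# Linear weighting penalizes poor-vs-excellent confusions more than
# adjacent-grade confusions; an assumption, not the paper's stated choice.
print(cohen_kappa_score(expert, model, weights="linear"))
```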


Deep Learning , Prostatic Neoplasms , Male , Humans , Prostate/diagnostic imaging , Feasibility Studies , Prostatic Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods
7.
Sci Rep ; 13(1): 8834, 2023 05 31.
Article En | MEDLINE | ID: mdl-37258516

The use of deep learning (DL) techniques for automated diagnosis of large vessel occlusion (LVO) and collateral scoring on computed tomography angiography (CTA) is gaining attention. In this study, a state-of-the-art self-configuring object detection network called nnDetection was used to detect LVO and assess collateralization on CTA scans using a multi-task 3D object detection approach. The model was trained on single-phase CTA scans of 2425 patients at five centers, and its performance was evaluated on an external test set of 345 patients from another center. Ground-truth labels for the presence of LVO and collateral scores were provided by three radiologists. The nnDetection model achieved a diagnostic accuracy of 98.26% (95% CI 96.25-99.36%) in identifying LVO, correctly classifying 339 out of 345 CTA scans in the external test set. The DL-based collateral scores had a kappa of 0.80, indicating good agreement with the consensus of the radiologists. These results demonstrate that the self-configuring 3D nnDetection model can accurately detect LVO on single-phase CTA scans and provide semi-quantitative collateral scores, offering a comprehensive approach for automated stroke diagnostics in patients with LVO.
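
The quoted 95% CI (96.25-99.36%) for 339 of 345 correct scans is consistent with an exact (Clopper-Pearson) binomial interval, though the abstract does not name the method; one way to reproduce such an interval:

```python
from statsmodels.stats.proportion import proportion_confint

correct, total = 339, 345
low, high = proportion_confint(correct, total, alpha=0.05, method="beta")  # Clopper-Pearson
print(f"accuracy = {correct / total:.4f}, 95% CI = ({low:.4f}, {high:.4f})")
# approximately (0.962, 0.994), close to the interval reported in the abstract
```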


Brain Ischemia , Stroke , Humans , Computed Tomography Angiography/methods , Stroke/diagnostic imaging , Tomography, X-Ray Computed , Middle Cerebral Artery , Retrospective Studies , Cerebral Angiography/methods
8.
Med Image Anal ; 87: 102808, 2023 07.
Article En | MEDLINE | ID: mdl-37087838

Assessment of myocardial viability is essential in the diagnosis and treatment management of patients suffering from myocardial infarction, and classification of pathology on the myocardium is the key to this assessment. This work defines a new medical image analysis task: myocardial pathology segmentation (MyoPS) combining three-sequence cardiac magnetic resonance (CMR) images, first proposed in the MyoPS challenge held in conjunction with MICCAI 2020. Note that in this paper MyoPS refers to both myocardial pathology segmentation and the challenge. The challenge provided 45 paired and pre-aligned CMR images, allowing algorithms to combine the complementary information from the three CMR sequences for pathology segmentation. In this article, we provide details of the challenge, survey the works of the fifteen participants, and interpret their methods according to five aspects: preprocessing, data augmentation, learning strategy, model architecture, and post-processing. In addition, we analyze the results with respect to different factors, in order to examine the key obstacles, explore the potential of solutions, and provide a benchmark for future research. The average Dice scores of the submitted algorithms were 0.614±0.231 for myocardial scars and 0.644±0.153 for edema. We conclude that while promising results have been reported, the research is still in its early stages, and more in-depth exploration is needed before successful application in the clinic. The MyoPS data and evaluation tool remain publicly available upon registration via the challenge homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20/).


Benchmarking , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Heart/diagnostic imaging , Myocardium/pathology , Magnetic Resonance Imaging/methods
9.
IEEE Rev Biomed Eng ; 16: 225-240, 2023.
Article En | MEDLINE | ID: mdl-34919522

Since the advent of U-Net, fully convolutional deep neural networks and their many variants have completely changed the modern landscape of deep learning-based medical image segmentation. However, the over-dependence of these methods on pixel-level classification and regression was identified as a problem early on. Especially when trained on medical databases with sparse annotation, these methods are prone to generating segmentation artifacts such as fragmented structures, topological inconsistencies, and islands of pixels. Such artifacts are especially problematic in medical imaging, since segmentation is almost always a pre-processing step for downstream evaluations such as surgical planning, visualization, prognosis, or treatment planning. One common thread across all these downstream tasks is the demand for anatomical consistency. To ensure anatomically consistent segmentation results, approaches based on Markov/conditional random fields, statistical shape models, and active contours have become increasingly popular over the past five years. This review gives a broad overview of the recent literature on bringing explicit anatomical constraints into medical image segmentation, discusses shortcomings and opportunities, and elaborates on the potential shift towards implicit shape modelling. We review the most relevant papers published up to the submission date and provide a tabulated view with method details for quick access.
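
As context for the review, the crudest fix for the "islands of pixels" artifact mentioned above is connected-component post-processing; a sketch of that baseline (which the surveyed model-based constraints aim to supersede):

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask: np.ndarray) -> np.ndarray:
    """Suppress small disconnected 'islands' in a binary segmentation
    by keeping only the largest connected component."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```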


Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Models, Statistical
10.
Med Phys ; 49(4): 2172-2182, 2022 Apr.
Article En | MEDLINE | ID: mdl-35218024

PURPOSE: To develop a knowledge-based decision-support system capable of stratifying patients for rectal spacer (RS) insertion based on neural network predicted rectal dose, reducing the need for time- and resource-intensive radiotherapy (RT) planning. METHODS: Forty-four patients treated for prostate cancer were enrolled into a clinical trial (NCT03238170). Dose-escalated prostate RT plans were manually created for 30 patients with simulated boost volumes using a conventional treatment planning system (TPS) and used to train a hierarchically dense 3D convolutional neural network to rapidly predict RT dose distributions. The network was used to predict rectal doses for 14 unseen test patients, with associated toxicity risks calculated according to published data. All metrics obtained using the network were compared to conventionally planned values. RESULTS: The neural network stratified patients with an accuracy of 100% based on optimal rectal dose-volume histogram constraints and 78.6% based on mandatory constraints. The network predicted dose-derived grade 2 rectal bleeding risk within 95% confidence limits of -1.9% to +1.7% of conventional risk estimates (risk range 3.5%-9.9%) and late grade 2 fecal incontinence risk within -0.8% to +1.5% (risk range 2.3%-5.7%). Prediction of high-resolution 3D dose distributions took 0.7 s. CONCLUSIONS: The feasibility of using a neural network to provide rapid decision support for RS insertion prior to RT has been demonstrated, and the potential for time and resource savings highlighted. Directly after target and healthy tissue delineation, the network is able to (i) risk stratify most patients with a high degree of accuracy to prioritize which patients would likely derive greatest benefit from RS insertion and (ii) identify patients close to the stratification threshold who would require conventional planning.
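
The dose-volume histogram constraints underlying the stratification reduce to volume fractions over the predicted 3D dose; a hedged sketch with placeholder thresholds (not the trial's actual constraints):

```python
import numpy as np

def dvh_volume_fraction(dose: np.ndarray, organ_mask: np.ndarray,
                        threshold_gy: float) -> float:
    """V_x: fraction of the organ's voxels receiving at least threshold_gy Gy."""
    organ_dose = dose[organ_mask.astype(bool)]
    return float((organ_dose >= threshold_gy).mean())

def needs_spacer(dose, rectum_mask, constraint=(60.0, 0.35)):
    """Flag a patient for rectal spacer insertion if a predicted DVH point
    exceeds a constraint (placeholder numbers, for illustration only)."""
    threshold_gy, max_fraction = constraint
    return dvh_volume_fraction(dose, rectum_mask, threshold_gy) > max_fraction
```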


Prostate , Prostatic Neoplasms , Humans , Male , Neural Networks, Computer , Prostatic Neoplasms/radiotherapy , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted , Rectum
11.
Sci Rep ; 12(1): 2084, 2022 02 08.
Article En | MEDLINE | ID: mdl-35136123

To investigate the performance of a joint convolutional neural network-recurrent neural network (CNN-RNN) model using an attention mechanism in identifying and classifying intracranial hemorrhage (ICH) on a large multi-center dataset, and to test its performance in a prospective independent sample of consecutive real-world patients. All consecutive patients who underwent emergency non-contrast-enhanced head CT at five different centers were retrospectively gathered. Five neuroradiologists created the ground-truth labels. The development dataset was divided into training and validation sets. After the development phase, we integrated the deep learning model into an independent center's PACS environment for over six months to assess its performance in a real clinical setting. Three radiologists created the ground-truth labels of the testing set by majority voting. A total of 55,179 head CT scans of 48,070 patients, 28,253 men (58.77%), with a mean age of 53.84 ± 17.64 years (range 18-89), were enrolled in the study. The validation sample comprised 5211 head CT scans, with 991 annotated as ICH-positive. The model's binary accuracy, sensitivity, and specificity on the validation set were 99.41%, 99.70%, and 98.91%, respectively. During the prospective implementation, the model yielded an accuracy of 96.02% on 452 head CT scans, with an average prediction time of 45 ± 8 s. The joint CNN-RNN model with an attention mechanism yielded excellent diagnostic accuracy in assessing ICH and its subtypes on a large-scale sample, and was seamlessly integrated into the radiology workflow. Despite slightly decreased performance, it provided decisions on consecutive real-world patients within a minute.


Deep Learning , Intracranial Hemorrhage, Traumatic/diagnostic imaging , Tomography, X-Ray Computed , Adolescent , Adult , Aged , Aged, 80 and over , Emergency Service, Hospital , Female , Humans , Male , Middle Aged , Prospective Studies , Retrospective Studies , Young Adult
12.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 8766-8778, 2022 12.
Article En | MEDLINE | ID: mdl-32886606

We introduce a method for training neural networks to perform image or volume segmentation in which prior knowledge about the topology of the segmented object can be explicitly provided and then incorporated into the training process. By using the differentiable properties of persistent homology, a concept from topological data analysis, we can specify the desired topology of segmented objects in terms of their Betti numbers and then drive the proposed segmentations to contain the specified topological features. Importantly, this process does not require any ground-truth labels, just prior knowledge of the topology of the structure being segmented. We demonstrate our approach in four experiments: one on MNIST image denoising and digit recognition, one on left ventricular myocardium segmentation from UK Biobank magnetic resonance imaging data, one on the ACDC public challenge dataset, and one on placenta segmentation from 3D ultrasound. We find that embedding explicit prior knowledge in neural network segmentation tasks is most beneficial when the segmentation task is especially challenging, and that it can be used in either a semi-supervised or post-processing context to extract a useful training gradient from images without pixelwise labels.
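
Betti numbers of a 2D binary mask, the quantities the topological prior targets, can be checked with plain connected-component analysis; a sketch for verification only, distinct from the paper's differentiable persistent-homology formulation:

```python
import numpy as np
from scipy import ndimage

def betti_2d(mask: np.ndarray):
    """(beta_0, beta_1) of a 2D binary mask via connected components.
    Uses 4-connectivity throughout; strictly, the background should use
    the dual connectivity, but this suffices for simple shapes."""
    mask = mask.astype(bool)
    _, b0 = ndimage.label(mask)                      # foreground components
    padded = np.pad(~mask, 1, constant_values=True)  # unify outer background
    _, bg = ndimage.label(padded)
    b1 = bg - 1                                      # holes = bg comps - outer
    return b0, b1

ring = np.zeros((32, 32), dtype=bool)
ring[8:24, 8:24] = True
ring[12:20, 12:20] = False   # myocardium-like ring: one piece, one hole
print(betti_2d(ring))        # (1, 1)
```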


Deep Learning , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Algorithms , Neural Networks, Computer , Magnetic Resonance Imaging/methods
13.
IEEE Trans Biomed Eng ; 69(4): 1398-1405, 2022 04.
Article En | MEDLINE | ID: mdl-34591755

OBJECTIVE: Magnetic Resonance Fingerprinting (MRF) enables simultaneous mapping of multiple tissue parameters, such as T1 and T2 relaxation times. MRF works by varying acquisition parameters pseudo-randomly, so that each tissue generates a unique signal evolution during scanning. Although MRF provides faster scanning, generation of the corresponding parametric maps remains slow and error-prone and needs to be improved. Moreover, explainable architectures are needed to understand which signals guide the generation of accurate parametric maps. METHODS: In this paper, we address both shortcomings by proposing a novel neural network architecture (CONV-ICA) consisting of a channel-wise attention module and a fully convolutional network. Another contribution of this study is a new channel selection method: attention-based channel selection. Furthermore, the effects of patch size and of the number of temporal frames of the MRF signal on channel reduction are analyzed using channel-wise attention. RESULTS: The proposed approach, evaluated on 3 simulated MRF signals, reduces the error in reconstructed tissue parameters by 8.88% for T1 and 75.44% for T2 with respect to state-of-the-art methods. CONCLUSION: The channel attention mechanism helps the network focus on informative channels while the fully convolutional network extracts spatial information, together achieving the best reconstruction performance. SIGNIFICANCE: By improving both speed and accuracy, the presented work can help make MRF suitable for clinical use.
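
A channel-wise attention module in the squeeze-and-excitation style is one common realisation of the "focus on informative channels" idea; a generic PyTorch sketch, not the CONV-ICA architecture itself:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention for 2D feature maps."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average per channel
        self.fc = nn.Sequential(                 # excite: learn a per-channel gate
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                       # reweight channels by importance

# e.g. attending over MRF temporal frames stacked as channels
attn = ChannelAttention(channels=64)
out = attn(torch.randn(2, 64, 32, 32))
```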


Brain , Image Processing, Computer-Assisted , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Magnetic Resonance Spectroscopy , Neural Networks, Computer
14.
Sci Rep ; 11(1): 12434, 2021 06 14.
Article En | MEDLINE | ID: mdl-34127692

There is little evidence on the applicability of deep learning (DL) to the segmentation of acute ischemic lesions on diffusion-weighted imaging (DWI) across magnetic resonance imaging (MRI) scanners of different manufacturers. We retrospectively included DWI data of patients with acute ischemic lesions from six centers. Datasets A (n = 2986) and B (n = 3951) comprised data from Siemens and GE MRI scanners, respectively. The datasets were split into training (80%), validation (10%), and internal test (10%) sets, and six neuroradiologists created ground-truth masks. Models A and B were the proposed neural networks trained on datasets A and B. The models were subsequently fine-tuned across the datasets using their validation data. Another radiologist performed the segmentation on the test sets for comparison. The median Dice scores of models A and B were 0.858 and 0.857 on the internal tests, non-inferior to the radiologist's performance, but both models performed worse than the radiologist on the external tests. The fine-tuned models A and B achieved median Dice scores of 0.832 and 0.846, which were non-inferior to the radiologist's performance on the external tests. The present work shows that the inter-vendor operability of deep learning for the segmentation of ischemic lesions on DWI can be enhanced via transfer learning, thereby improving clinical applicability and generalizability.
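
The cross-vendor transfer described above amounts to warm-starting from the source-vendor weights and continuing training at a reduced learning rate on the other vendor's data; a generic PyTorch sketch (the interface and hyperparameters are assumptions, not the paper's setup):

```python
import torch

def fine_tune(model, target_loader, loss_fn, lr=1e-5, epochs=2):
    """Warm-start fine-tuning on the target vendor's data.

    A small learning rate adapts the pretrained features rather than
    overwriting them; `target_loader` yields (images, masks) batches.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, masks in target_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optimizer.step()
    return model

# e.g. model.load_state_dict(torch.load("model_A.pt")) before calling fine_tune
```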


Deep Learning/statistics & numerical data , Diffusion Magnetic Resonance Imaging/instrumentation , Image Interpretation, Computer-Assisted/instrumentation , Ischemic Stroke/diagnosis , Radiologists/statistics & numerical data , Aged , Aged, 80 and over , Brain/diagnostic imaging , Datasets as Topic , Female , Humans , Image Interpretation, Computer-Assisted/statistics & numerical data , Male , Middle Aged , Retrospective Studies
15.
IEEE J Biomed Health Inform ; 25(9): 3541-3553, 2021 09.
Article En | MEDLINE | ID: mdl-33684050

Automatic quantification of the left ventricle (LV) from cardiac magnetic resonance (CMR) images plays an important role in making the diagnostic procedure efficient and reliable and in alleviating the laborious reading work for physicians. Considerable efforts have been devoted to LV quantification using different strategies that include segmentation-based (SG) methods and the recent direct regression (DR) methods. Although both SG and DR methods have obtained great success for the task, a systematic platform to benchmark them remains absent because of differences in label information during model learning. In this paper, we conducted an unbiased evaluation and comparison of cardiac LV quantification methods that were submitted to the Left Ventricle Quantification (LVQuan) challenge, which was held in conjunction with the Statistical Atlases and Computational Modeling of the Heart (STACOM) workshop at MICCAI 2018. The challenge was targeted at the quantification of 1) areas of the LV cavity and myocardium, 2) dimensions of the LV cavity, 3) regional wall thicknesses (RWT), and 4) the cardiac phase, from mid-ventricle short-axis CMR images. First, we constructed a public quantification dataset, Cardiac-DIG, with ground-truth labels for both the myocardium mask and these quantification targets across the entire cardiac cycle. Then, the key techniques employed by each submission were described. Next, quantitative validation of these submissions was conducted with the constructed dataset. The evaluation results revealed that both SG and DR methods can offer good LV quantification performance, even though DR methods do not require densely labeled masks for supervision. Among the 12 submissions, the DR method LDAMT offered the best performance, with a mean estimation error of 301 mm² for the two areas, 2.15 mm for the cavity dimensions, 2.03 mm for RWTs, and a 9.5% error rate for the cardiac phase classification. Three of the SG methods also delivered comparable performances. Finally, we discussed the advantages and disadvantages of SG and DR methods, as well as the unsolved problems in automatic cardiac quantification for clinical practice applications.


Heart Ventricles , Magnetic Resonance Imaging, Cine , Heart , Heart Ventricles/diagnostic imaging , Humans , Magnetic Resonance Imaging
16.
Br J Radiol ; 94(1120): 20200026, 2021 Apr 01.
Article En | MEDLINE | ID: mdl-33684314

OBJECTIVES: Mandible osteoradionecrosis (ORN) is one of the most severe toxicities in patients with head and neck cancer (HNC) undergoing radiotherapy (RT). The existing literature focuses on the correlation of mandible ORN with clinical and dosimetric factors. This study proposes the use of machine learning (ML) methods as prediction models for mandible ORN incidence. METHODS: A total of 96 patients (ORN incidence ratio of 1:1) treated between 2011 and 2015 were selected from the local HNC toxicity database. Demographic, clinical, and dosimetric data (based on the mandible dose-volume histogram) were considered as model variables. Prediction accuracy (measured using stratified fivefold nested cross-validation), sensitivity, specificity, precision, and negative predictive value were used to evaluate the prediction performance of a multivariate logistic regression (LR) model, a support vector machine (SVM) model, a random forest (RF) model, an adaptive boosting (AdaBoost) model, and an artificial neural network (ANN) model. The models were compared based on their prediction accuracy and using McNemar's hypothesis test. RESULTS: The ANN model (77% accuracy), closely followed by the SVM (76%), AdaBoost (75%), and LR (75%) models, showed the highest overall prediction accuracy. The RF model (71%) showed the lowest prediction accuracy. However, based on McNemar's test applied to all model pair combinations, no statistically significant difference between the models was found. CONCLUSION: Based on our results, we encourage the use of ML-based prediction models for ORN incidence, as has already been done for other HNC toxicity end points. ADVANCES IN KNOWLEDGE: This research opens a new path towards personalised RT for HNC using ML to predict mandible ORN incidence.
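
McNemar's test compares two classifiers evaluated on the same patients via their disagreement counts; a sketch with statsmodels and illustrative counts (not the study's data):

```python
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired outcomes on the same patients:
# rows = model 1 correct/incorrect, cols = model 2 correct/incorrect
table = [[60, 14],
         [10, 12]]
result = mcnemar(table, exact=True)  # exact binomial test on the 14-vs-10 discordant cells
print(result.pvalue)                 # > 0.05 here: no significant difference
```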


Head and Neck Neoplasms/radiotherapy , Machine Learning , Mandible/radiation effects , Osteoradionecrosis/diagnosis , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Female , Head and Neck Neoplasms/diagnostic imaging , Humans , Incidence , Male , Mandible/diagnostic imaging , Middle Aged , Predictive Value of Tests , Reproducibility of Results , Sensitivity and Specificity
17.
Comput Methods Programs Biomed ; 199: 105909, 2021 Feb.
Article En | MEDLINE | ID: mdl-33373815

BACKGROUND AND OBJECTIVE: Brain MRI is one of the most commonly used diagnostic imaging tools to detect neurodegenerative disease. Diagnostic image quality is a key factor in enabling robust image analysis algorithms developed for downstream tasks such as segmentation. In clinical practice, one of the main challenges is the presence of image artefacts, which can lead to low diagnostic image quality. METHODS: In this paper, we propose using a dense convolutional neural network to detect, and a residual U-net architecture to correct, motion-related brain MRI artefacts. We first generate synthetic artefacts using an MR-physics-based corruption strategy. Artefacts are then detected by the dense convolutional neural network and corrected by the residual U-net, which is trained on corrupted data. RESULTS: Our detection-and-correction pipeline achieves not only better image quality but also better stroke segmentation accuracy. The algorithm was validated on a 28-case brain MRI stroke segmentation dataset and detected artefacts with an accuracy of 97.8% in our experiments. We also illustrate the improved image quality and segmentation accuracy obtained with the proposed correction algorithm. CONCLUSIONS: Jointly ensuring high image quality and high segmentation quality can improve automatic image analysis pipelines and reduce the influence of low image quality on the final prognosis. With this work, we illustrate such a performance analysis on brain MRI stroke segmentation.
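
Motion artefacts of the kind synthesised here are often simulated by corrupting k-space, e.g. replacing some phase-encode lines with lines from a shifted copy of the image, as if the patient moved between readouts; a numpy sketch of that generic idea, not the paper's exact MR-physics strategy:

```python
import numpy as np

def add_motion_artefact(image: np.ndarray, shift: int = 4, fraction: float = 0.3,
                        rng=np.random.default_rng(0)) -> np.ndarray:
    """Simulate inter-shot motion by mixing k-space lines of the original
    image with lines from a translated copy."""
    k_still = np.fft.fftshift(np.fft.fft2(image))
    moved = np.roll(image, shift, axis=0)           # patient moved mid-scan
    k_moved = np.fft.fftshift(np.fft.fft2(moved))
    corrupted = k_still.copy()
    rows = rng.choice(image.shape[0], size=int(fraction * image.shape[0]),
                      replace=False)
    corrupted[rows, :] = k_moved[rows, :]           # lines acquired after motion
    return np.abs(np.fft.ifft2(np.fft.ifftshift(corrupted)))
```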


Artifacts , Neurodegenerative Diseases , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neural Networks, Computer
18.
IEEE Trans Med Imaging ; 39(12): 4001-4010, 2020 12.
Article En | MEDLINE | ID: mdl-32746141

Segmenting anatomical structures in medical images has been successfully addressed with deep learning methods for a range of applications. However, this success is heavily dependent on the quality of the image that is being segmented. A commonly neglected point in the medical image analysis community is the vast number of clinical images that have severe artefacts due to organ motion, patient movement, and/or image-acquisition issues. In this paper, we discuss the implications of image motion artefacts for cardiac MR segmentation and compare a variety of approaches for jointly correcting artefacts and segmenting the cardiac cavity. The method is based on our recently developed joint artefact detection and reconstruction method, which reconstructs high-quality MR images from k-space using a joint loss function and essentially converts the artefact correction task into an under-sampled image reconstruction task by enforcing a data consistency term. In this paper, we propose to use a segmentation network coupled with this in an end-to-end framework. Our training optimises three different tasks: 1) image artefact detection, 2) artefact correction, and 3) image segmentation. We train the reconstruction network to automatically correct motion-related artefacts using synthetically corrupted cardiac MR k-space data and uncorrected reconstructed images. Using a test set of 500 2D+time cine MR acquisitions from the UK Biobank dataset, we achieve demonstrably good image quality and high segmentation accuracy in the presence of synthetic motion artefacts, and we show better performance than various image correction architectures.
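
The data consistency term mentioned above has a simple hard-projection form: wherever k-space was actually acquired, the measured samples overwrite the network's prediction; a numpy sketch of that generic operation:

```python
import numpy as np

def data_consistency(pred_image: np.ndarray, measured_kspace: np.ndarray,
                     sampling_mask: np.ndarray) -> np.ndarray:
    """Enforce agreement with acquired k-space samples (hard projection)."""
    pred_k = np.fft.fft2(pred_image)
    # keep the network's prediction only where nothing was measured
    fused = np.where(sampling_mask, measured_kspace, pred_k)
    return np.fft.ifft2(fused)
```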


Artifacts , Deep Learning , Image Processing, Computer-Assisted , Heart/diagnostic imaging , Humans , Motion
19.
Magn Reson Imaging ; 70: 155-167, 2020 07.
Article En | MEDLINE | ID: mdl-32353528

PURPOSE: To enable fast reconstruction of undersampled motion-compensated whole-heart 3D coronary magnetic resonance angiography (CMRA) by learning a multi-scale variational neural network (MS-VNN) that allows the acquisition of high-quality 1.2 × 1.2 × 1.2 mm isotropic volumes in a short and predictable scan time. METHODS: Eighteen healthy subjects and one patient underwent free-breathing 3D CMRA acquisition with variable-density spiral-like Cartesian sampling, combined with 2D image navigators for translational motion estimation/compensation. The proposed MS-VNN learns two sets of kernels and activation functions for the magnitude and phase images of the complex-valued data. For the magnitude, a multi-scale approach is applied to better capture the small calibre of the coronaries. Ten subjects were considered for training and validation. Prospectively undersampled motion-compensated data with 5-fold and 9-fold accelerations, from the remaining 9 subjects, were used to evaluate the framework. The proposed approach was compared to wavelet-based compressed sensing (CS), a conventional VNN, and an additional fully-sampled (FS) scan. RESULTS: The average acquisition time (m:s) was 4:11 for 5-fold acceleration, 2:34 for 9-fold acceleration, and 18:55 for the fully-sampled scan. Reconstruction time with the proposed MS-VNN was ~14 s. The proposed MS-VNN achieves higher image quality than the CS and VNN reconstructions, with quantitative right coronary artery sharpness (CS: 43.0%, VNN: 43.9%, MS-VNN: 47.0%, FS: 50.67%) and vessel length (CS: 7.4 cm, VNN: 7.7 cm, MS-VNN: 8.8 cm, FS: 9.1 cm) comparable to the FS scan. CONCLUSION: The proposed MS-VNN enables 5-fold and 9-fold undersampled CMRA acquisitions with image quality comparable to that of the corresponding fully-sampled scan. The proposed framework achieves extremely fast reconstruction times and does not require tuning of regularization parameters, offering easy integration into the clinical workflow.


Coronary Angiography , Coronary Vessels/diagnostic imaging , Heart/diagnostic imaging , Imaging, Three-Dimensional/methods , Magnetic Resonance Angiography , Movement , Neural Networks, Computer , Adult , Female , Humans , Male , Reproducibility of Results , Respiration
20.
Sci Rep ; 10(1): 2748, 2020 02 17.
Article En | MEDLINE | ID: mdl-32066744

We present a comprehensive analysis of the submissions to the first edition of the Endoscopy Artefact Detection (EAD) challenge. Using crowd-sourcing, this initiative is a step towards understanding the limitations of existing state-of-the-art computer vision methods applied to endoscopy and promoting the development of new approaches suitable for clinical translation. Endoscopy is a routine imaging technique for the detection, diagnosis, and treatment of diseases in hollow organs: the esophagus, stomach, colon, uterus, and bladder. However, the nature of these organs prevents imaged tissues from being free of imaging artefacts such as bubbles, pixel saturation, organ specularity, and debris, all of which pose substantial challenges for any quantitative analysis. Consequently, the potential for improved clinical outcomes through quantitative assessment of the abnormal mucosal surfaces observed in endoscopy videos is presently not fully realized. The EAD challenge promotes awareness of and addresses this key bottleneck problem by investigating methods that can accurately classify, localize, and segment artefacts in endoscopy frames as critical prerequisite tasks. Using a diverse, curated, multi-institutional, multi-modality, multi-organ dataset of video frames, the accuracy and performance of 23 algorithms were objectively ranked for artefact detection and segmentation. The ability of methods to generalize to unseen datasets was also evaluated. The best-performing methods (top 15%) propose deep learning strategies to reconcile variabilities in artefact appearance with respect to size, modality, occurrence, and organ type. However, no single method outperformed the others across all tasks. Detailed analyses reveal the shortcomings of current training strategies and highlight the need for developing new optimal metrics to accurately quantify the clinical applicability of methods.


Algorithms , Artifacts , Endoscopy/standards , Image Interpretation, Computer-Assisted/standards , Imaging, Three-Dimensional/standards , Neural Networks, Computer , Colon/diagnostic imaging , Colon/pathology , Datasets as Topic , Endoscopy/statistics & numerical data , Esophagus/diagnostic imaging , Esophagus/pathology , Female , Humans , Image Interpretation, Computer-Assisted/statistics & numerical data , Imaging, Three-Dimensional/statistics & numerical data , International Cooperation , Male , Stomach/diagnostic imaging , Stomach/pathology , Urinary Bladder/diagnostic imaging , Urinary Bladder/pathology , Uterus/diagnostic imaging , Uterus/pathology
...